A. Gruffalo Index of Language Variation

We used The Gruffalo and The Gruffalo's Child (Donaldson, 2005), translated into various Scots dialects, as a baseline for the amount of phonological and lexical variation between language varieties.

The Gruffalo

  • The Doric Gruffalo (translated by Sheena Blackhall)
  • Thi Dundee Gruffalo (translated by Matthew Fitt)
  • The Glasgow Gruffalo (translated by Elaine C. Smith)
  • The Gruffalo in Scots (translated by James Robertson)

The Gruffalo’s Child

  • The Doric Gruffalo’s Bairn (translated by Sheena Blackhall)
  • Thi Dundee Gruffalo’s Bairn (translated by Matthew Fitt)
  • The Gruffalo’s Wean (Scots; translated by James Robertson)

Note: The Gruffalo’s Child was not available in Glaswegian at the time of this corpus analysis.

B. Graphemes

Invented graphemes used to represent each phoneme in all experiments. The final two graphemes were created but not used in these experiments. To prevent participants from memorizing the novel graphemes based on resemblance to known graphemes we controlled for similarity to characters of extant writing systems by comparing each invented grapheme against the database of 11,817 characters (excluding Chinese, Korean, and Japanese) on the Shapecatcher website (Milde, 2011). If visual inspection indicated a resemblance, we modified the grapheme to minimize that resemblance.

C. Word List

Word list used in all conditions.
Each row lists the Base Word, its Inconsistent Spelling, and its Noncontrastive and Contrastive Pronunciations (where applicable).
Training Words
kublE kubnE kublE xublE
skEfi skEfi skEfi sxifi
snid fnid snid sni
smadu smadu smadu smOdu
slOku fnOku slOku slOxu
nal nal nal nOl
nEsk nEsk nEsk nisx
nEf nEf nEf nif
flEsOd flEsOd flEsOd flisO
dasmu dasmu dasmu dOsmu
daf daf daf dOf
balf balf balf bOlf
blaf bnaf blaf blOf
blEkus bnEkus blEkus blixus
bEsmi bEsmi bEsmi bismi
fub fub fub
mif mif mif
lOm lOm lOm
snOf fnOf snOf
blim bnim blim
flOb flOb flOb
mOls mOls mOls
fOns fOns fOns
nifs nifs nifs
nOflE nOflE nOflE
dEsna dEfna dEsna
smiba smiba smiba
flidu flidu flidu
snibOl fnibOl snibOl
slinab fninab slinab
Testing Words
mab mab mab
skub skub skub
klEb klEb klEb
dOlk dOlk dOlk
suld suld suld
dikla dikla dikla
luskO luskO luskO
klufE klufE klufE
klOda klOda klOda
skOnEf skOnEf skOnEf
klusim klusim klusim
flabun flabun flabun
Note.
All conditions used the Inconsistent spelling. The standard condition used the noncontrastive pronunciations in the exposure, training, and testing phases.
The dialect conditions used contrastive pronunciations during exposure and noncontrastive pronunciations during training and testing, with the exception of the Dialect Literacy condition, where words were pronounced in training using noncontrastive and contrastive pronunciations separately.
Words are presented according to the CPSAMPA coding convention for simplified IPA characters.

D. Image Norms

Images used from the colourised Vanderwart picture set (Rossion & Pourtois, 2004) and their related norming results.

Our subset of items is as follows:

  • Body part: Finger, foot, eye, hand, nose, arm, ear.

  • Furniture and kitchen utensils: Chair, glass, bed, fork, spoon, pot, desk.

  • Household objects, tools, and instruments: Television, toothbrush, book, pen, refrigerator, watch, pencil.

  • Food and clothing: Pants, socks, shirt, sweater, apple, tomato, potato.

  • Buildings, building features, and vehicles: Door, house, window, car, doorknob, truck, bicycle.

  • Animals and plants: Tree, dog, cat, flower, rabbit, duck, chicken.

The subset of pictures and their associated norms are provided in the supplemental material in Williams, Panayotov, & Kempe (2020) at: https://osf.io/5mtdj/.

E. Model Priors and Posterior Predictive Checks

Priors for the fitted models are described by their distribution family and the parameters that define that distribution. For example, a prior of \(\mathcal{N}(0, 1)\) describes a normal distribution with a mean of 0 and a standard deviation of 1, and a prior of \(\mathrm{logistic}(0, 1)\) describes a logistic distribution with a location of 0 and a scale of 1. Note that, by default, brms restricts priors on SD terms to be positive.
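
The distinction between the two parameterisations matters when judging prior width: under the standard location-scale parameterisation, a logistic distribution with scale \(s\) has standard deviation \(s\pi/\sqrt{3}\), so \(\mathrm{logistic}(0, 1)\) is wider than \(\mathcal{N}(0, 1)\). A quick illustrative check (not tied to the fitted models):

```python
import math
import random

random.seed(1)

# For a logistic distribution parameterised by location and scale, the standard
# deviation is scale * pi / sqrt(3) (about 1.81 for scale = 1), so a
# logistic(0, 1) prior is wider than a N(0, 1) prior.
logistic_sd = math.pi / math.sqrt(3)

# Monte Carlo check via the logistic quantile function (inverse CDF).
n = 200_000
draws = [math.log(u / (1.0 - u)) for u in (random.random() for _ in range(n))]
mean = sum(draws) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in draws) / n)
```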

Weakly informative regularising priors were used for all terms. All priors were centred on 0, with standard deviations ranging from 0.5 to 10, thus allowing for a range of values with less prior probability placed on extreme responses. For the slope terms, the priors assume anything from no effect to small effects for each parameter in either direction. Weakly informative regularising priors were also used for all standard deviation terms. Finally, an \(LKJ(2)\) prior was used for the correlation between terms, which acts to down-weight perfect correlations (Vasishth, Mertzen, Jäger, & Gelman, 2018). These priors are in some cases more informative than initially planned in our pre-registration (which used very weakly informative priors), in order to improve model fit (i.e. to account for divergences during fitting). For example, the \(\mu\) intercept and slope and the \(\gamma\) slope have standard deviations half as large as planned, while the standard deviation for the \(\phi\) intercept is three times as large as planned. Additionally, 8000 iterations were used instead of 1000, and 6 chains were used rather than 4, to improve estimates in response to warnings about bulk and tail effective sample size, totalling 48,000 samples rather than the planned 4000.

The following priors were used for the exposure model:

  • Intercept
    • \(\mu\): \(\mathcal{N}(0, 5)\)
    • \(\phi\): \(\mathcal{N}(0, 3)\)
    • \(\alpha\): \(\mathrm{logistic}(0, 1)\)
    • \(\gamma\): \(\mathrm{logistic}(0, 1)\)
  • Slope
    • \(\mu\): \(\mathcal{N}(0, 0.5)\)
    • \(\phi\): \(\mathcal{N}(0, 1)\)
    • \(\alpha\): \(\mathcal{N}(0, 5)\)
    • \(\gamma\): \(\mathcal{N}(0, 0.5)\)
  • SD
    • \(\mu\): \(\mathcal{N}(0, 1)\)
    • \(\phi\): \(\mathcal{N}(0, 1)\)
    • \(\alpha\): \(\mathcal{N}(0, 5)\)
    • \(\gamma\): \(\mathcal{N}(0, 5)\)
  • SD by Participant Number
    • \(\mu\): \(\mathcal{N}(0, 1)\)
    • \(\phi\): \(\mathcal{N}(0, 5)\)
    • \(\alpha\): \(\mathcal{N}(0, 10)\)
    • \(\gamma\): \(\mathcal{N}(0, 10)\)
  • SD by Item
    • \(\mu\): \(\mathcal{N}(0, 1)\)
    • \(\phi\): \(\mathcal{N}(0, 5)\)
    • \(\alpha\): \(\mathcal{N}(0, 10)\)
    • \(\gamma\): \(\mathcal{N}(0, 10)\)
  • Correlation
    • \(LKJ(2)\)

For both testing models, the following priors were used:

  • Intercept
    • \(\mu\): \(\mathcal{N}(0, 5)\)
    • \(\phi\): \(\mathcal{N}(0, 3)\)
    • \(\alpha\): \(\mathrm{logistic}(0, 1)\)
    • \(\gamma\): \(\mathrm{logistic}(0, 1)\)
  • Slope
    • \(\mu\): \(\mathcal{N}(0, 1)\)
    • \(\phi\): \(\mathcal{N}(0, 1)\)
    • \(\alpha\): \(\mathcal{N}(0, 5)\)
    • \(\gamma\): \(\mathcal{N}(0, 1)\)
  • SD
    • \(\mu\): \(\mathcal{N}(0, 1)\)
    • \(\phi\): \(\mathcal{N}(0, 1)\)
    • \(\alpha\): \(\mathcal{N}(0, 5)\)
    • \(\gamma\): \(\mathcal{N}(0, 5)\)
  • SD by Participant Number
    • \(\mu\): \(\mathcal{N}(0, 1)\)
    • \(\phi\): \(\mathcal{N}(0, 5)\)
    • \(\alpha\): \(\mathcal{N}(0, 10)\)
    • \(\gamma\): \(\mathcal{N}(0, 10)\)
  • SD by Item
    • \(\mu\): \(\mathcal{N}(0, 1)\)
    • \(\phi\): \(\mathcal{N}(0, 5)\)
    • \(\alpha\): \(\mathcal{N}(0, 10)\)
    • \(\gamma\): \(\mathcal{N}(0, 10)\)
  • Correlation
    • \(LKJ(2)\)

Because more observations were available for the testing-phase analyses, both the \(\mu\) and \(\gamma\) slope terms use more weakly informative priors than in the exposure model. This allows the data a larger influence on the parameter estimates without compromising model convergence.

Posterior predictive checks were performed for all three models, comparing the density of the observed data against densities of samples drawn from the fitted model. Well-fitting models show close concordance between the observed and sampled densities. Plots for each model are displayed below. Grey lines indicate samples from the posterior, while black lines indicate the observed sample density.
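
In code terms, such a check amounts to simulating replicated datasets from the posterior and comparing them with the observed data. A deliberately simple toy sketch with a normal model (purely illustrative; the models fitted here are far richer):

```python
import random
import statistics

random.seed(42)

# Toy posterior predictive check: pretend the fitted model is a simple normal
# with posterior draws for its mean, then compare replicated datasets against
# the observed data (illustrative only; not the models fitted in this appendix).
observed = [random.gauss(0.5, 0.1) for _ in range(200)]

# Pretend posterior for the mean, concentrated near the truth.
posterior_mu_draws = [random.gauss(0.5, 0.01) for _ in range(100)]

# One replicated dataset per posterior draw (the grey lines in the plots).
replicated = [[random.gauss(mu, 0.1) for _ in range(200)] for mu in posterior_mu_draws]

# Crude numerical concordance check on the location: for a well-fitting model,
# the observed mean should sit inside the spread of replicated means.
rep_means = [statistics.mean(r) for r in replicated]
lo, hi = min(rep_means), max(rep_means)
obs_mean = statistics.mean(observed)
```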

As can be seen from the plots, the posterior predictive checks indicate a generally good fit in all instances: the models largely capture the shape of the data (e.g. the 0 and 1 inflation in the testing model), but do not capture some discrepancies in the data that do not arise from any defined process (e.g. some larger densities in the testing model within the 0-1 range).

F. Fitted Model Summaries

In the tables of population-level (fixed) effects, \(\hat{R}\) is a convergence diagnostic comparing within- and between-chain estimates, with values closer to 1 being preferable. The bulk and tail effective sample sizes estimate the number of independent draws that would carry the same amount of information as the dependent (autocorrelated) sample (Vehtari et al., 2020), with higher values being preferable. The tail effective sample size is determined at the 5% and 95% quantiles, while the bulk effective sample size is determined from the values between these quantiles.
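
As a rough intuition for \(\hat{R}\), a toy version of the diagnostic can be computed directly from the between- and within-chain variances; the published diagnostic (Vehtari et al., 2020) additionally splits chains and rank-normalises draws, which this sketch omits:

```python
import random
import statistics

random.seed(0)

# A toy (non-split, non-rank-normalised) R-hat for chains of n draws each:
# it compares the between-chain variance with the within-chain variance.
def rhat(chains):
    n = len(chains[0])
    chain_means = [statistics.mean(c) for c in chains]
    w = statistics.mean(statistics.variance(c) for c in chains)  # within-chain
    b = n * statistics.variance(chain_means)                     # between-chain
    var_plus = (n - 1) / n * w + b / n
    return (var_plus / w) ** 0.5

# Chains sampling the same distribution give an R-hat close to 1 ...
mixed = [[random.gauss(0, 1) for _ in range(2000)] for _ in range(4)]

# ... while a chain stuck in a shifted region inflates R-hat well above 1.
stuck = mixed[:3] + [[random.gauss(5, 1) for _ in range(2000)]]
```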

Vocabulary Test Model

A summary of the population-level (fixed) effects for the Vocabulary Test model is provided below. This can be used to determine model diagnostics, coefficients, and estimates around these coefficients using 95% credible intervals.

Note that models were fitted with the above priors and sum-coded effects of exposure condition, word type, and task. As a result, the intercept represents the grand mean (i.e. the mean of the means of the dependent variable at each level of the categorical variables). The regression coefficients then represent the difference between the grand mean and each of the following:

  • Exposure Condition:
    • Dialect condition.
    • Dialect & Social condition.
    • Dialect Literacy condition.

    To get parameter estimates for the Dialect, Dialect & Social, and Dialect Literacy conditions, add their regression coefficients to the intercept. To get estimates for the No Dialect condition, subtract all three Exposure Condition coefficients from the intercept.

  • Word Type: Contrastive words.

    To get parameter estimates for contrastive words, add the coefficient to the intercept. To get the estimates for non-contrastive words, subtract the coefficient from the intercept.

  • Task: Reading.

    To get parameter estimates for the reading task, add the coefficient to the intercept. To get the estimates for the spelling task, subtract the coefficient from the intercept.
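
The sum-coding arithmetic above can be sketched with made-up numbers and generic level names (these are not the fitted estimates):

```python
# Sum (deviation) coding with four levels: three get coefficients, one is omitted.
# All numbers below are made up for illustration.
intercept = 0.34  # grand mean
coefs = {"level_a": 0.03, "level_b": -0.04, "level_c": 0.00}

# Coded levels: add the level's coefficient to the intercept.
estimates = {level: intercept + b for level, b in coefs.items()}

# Omitted level: subtract all coded coefficients from the intercept.
estimates["level_d"] = intercept - sum(coefs.values())

# Sanity check: under sum coding, the level estimates average back to the grand mean.
grand_mean = sum(estimates.values()) / len(estimates)
```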

Parameter Estimate Est. Error 95% CrI \(\hat{R}\) Bulk ESS Tail ESS
\(\mu\)
Intercept 0.341 0.029 [0.284, 0.396] 1.001 7824 12363
Dialect 0.027 0.037 [-0.046, 0.098] 1.000 6736 11607
Dialect & Social -0.042 0.037 [-0.115, 0.031] 1.001 7308 12508
Dialect Literacy 0.002 0.036 [-0.070, 0.074] 1.000 6644 11987
Word Type 0.030 0.022 [-0.013, 0.074] 1.000 10320 15405
Dialect \(\times\) Word Type -0.002 0.021 [-0.043, 0.039] 1.000 30108 19005
Dialect & Social \(\times\) Word Type -0.002 0.021 [-0.044, 0.039] 1.000 28808 19137
Dialect Literacy \(\times\) Word Type 0.002 0.020 [-0.038, 0.042] 1.000 31508 19390
\(\phi\)
Intercept 1.819 0.033 [1.756, 1.884] 1.002 5350 13365
Dialect -0.011 0.041 [-0.093, 0.069] 1.000 12336 18285
Dialect & Social -0.055 0.041 [-0.136, 0.026] 1.000 12602 16852
Dialect Literacy 0.016 0.040 [-0.063, 0.094] 1.000 13794 17082
Word Type 0.004 0.029 [-0.054, 0.062] 1.000 10054 17796
Dialect \(\times\) Word Type -0.025 0.037 [-0.099, 0.047] 1.000 29030 19408
Dialect & Social \(\times\) Word Type 0.006 0.036 [-0.065, 0.076] 1.000 29955 19532
Dialect Literacy \(\times\) Word Type 0.040 0.035 [-0.029, 0.110] 1.000 31022 20460
\(\alpha\)
Intercept -0.140 0.099 [-0.336, 0.052] 1.000 15525 17181
Dialect 0.089 0.088 [-0.084, 0.261] 1.000 13530 17076
Dialect & Social 0.047 0.085 [-0.120, 0.213] 1.001 13206 16313
Dialect Literacy -0.098 0.084 [-0.264, 0.067] 1.000 13104 15637
Word Type -0.075 0.088 [-0.250, 0.095] 1.000 14507 15504
Dialect \(\times\) Word Type -0.008 0.050 [-0.107, 0.091] 1.000 25967 18558
Dialect & Social \(\times\) Word Type -0.004 0.044 [-0.090, 0.083] 1.000 31103 19435
Dialect Literacy \(\times\) Word Type 0.049 0.045 [-0.040, 0.138] 1.000 29087 18243
\(\gamma\)
Intercept 1.152 0.200 [0.761, 1.550] 1.000 11742 16173
Dialect 0.190 0.216 [-0.232, 0.615] 1.000 11829 16574
Dialect & Social -0.259 0.192 [-0.638, 0.118] 1.000 9681 14910
Dialect Literacy 0.065 0.196 [-0.319, 0.448] 1.000 9621 14270
Word Type 0.216 0.160 [-0.102, 0.532] 1.000 15056 16693
Dialect \(\times\) Word Type -0.048 0.140 [-0.325, 0.227] 1.000 21307 18614
Dialect & Social \(\times\) Word Type -0.074 0.086 [-0.243, 0.094] 1.000 31341 17516
Dialect Literacy \(\times\) Word Type 0.007 0.095 [-0.179, 0.196] 1.000 25910 19286

Testing Phase Model

A summary of the Testing Phase model is provided below. This can be used to determine model diagnostics and coefficients.

The same coding was used here as in the Vocabulary Test model, except that word type has three levels in this phase of the experiment: words can be contrastive, non-contrastive, or untrained. Word type was therefore Helmert coded in R (R's default is what is traditionally called reverse Helmert coding), such that the first estimate (Word Type in the table below) represents half the difference in scores between trained non-contrastive words and trained contrastive words. The second estimate (Word Familiarity in the table below) represents the difference in scores between untrained, novel words and the mean of the trained (non-contrastive and contrastive) words. Thus, this estimates the effect of words being untrained vs. trained.

Thus, as before, the intercept represents the grand mean. To get parameter estimates for contrastive words, subtract both the Word Type and Word Familiarity estimates from the intercept. To get parameter estimates for non-contrastive words, add the Word Type estimate to, and subtract the Word Familiarity estimate from, the intercept. To get parameter estimates for novel, untrained words, add twice the Word Familiarity estimate to the intercept (the Word Type contrast does not distinguish this level).
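
A minimal sketch of this arithmetic, assuming R's default contr.helmert codes for three levels and using made-up coefficients (not the fitted estimates):

```python
# R's contr.helmert for three levels assigns these codes (rows = levels):
#   level a (e.g. contrastive):      -1, -1
#   level b (e.g. non-contrastive):   1, -1
#   level c (e.g. untrained):         0,  2
codes = {"contrastive": (-1, -1), "non_contrastive": (1, -1), "untrained": (0, 2)}

# Hypothetical coefficients, for illustration only.
intercept = -0.40
word_type = -0.005         # first Helmert contrast
word_familiarity = -0.002  # second Helmert contrast

# A level's estimate is the intercept plus its codes times the coefficients.
estimates = {
    level: intercept + c1 * word_type + c2 * word_familiarity
    for level, (c1, c2) in codes.items()
}

# The three level estimates average back to the intercept (grand mean),
# because each contrast column sums to zero.
grand_mean = sum(estimates.values()) / 3
```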

Parameter Estimate Est. Error 95% CrI \(\hat{R}\) Bulk ESS Tail ESS
\(\mu\)
Intercept -0.393 0.039 [-0.470, -0.317] 1.009 872 2362
Task -0.057 0.009 [-0.075, -0.040] 1.002 4293 10471
Dialect -0.030 0.056 [-0.138, 0.080] 1.009 760 1537
Dialect & Social -0.025 0.056 [-0.134, 0.083] 1.005 976 1969
Dialect Literacy 0.028 0.055 [-0.081, 0.134] 1.007 882 1478
Word Type -0.003 0.022 [-0.046, 0.039] 1.002 3844 7230
Word Familiarity -0.002 0.014 [-0.030, 0.026] 1.002 3580 7359
Task \(\times\) Dialect 0.007 0.015 [-0.022, 0.037] 1.001 4333 9246
Task \(\times\) Dialect & Social 0.021 0.015 [-0.009, 0.050] 1.000 5496 11559
Task \(\times\) Dialect Literacy -0.011 0.015 [-0.040, 0.018] 1.001 4634 11692
Task \(\times\) Word Type -0.007 0.006 [-0.018, 0.006] 1.001 27098 16538
Task \(\times\) Word Familiarity 0.010 0.004 [0.003, 0.017] 1.000 23157 17466
Dialect \(\times\) Word Type 0.010 0.011 [-0.012, 0.031] 1.000 20350 18569
Dialect & Social \(\times\) Word Type 0.021 0.011 [-0.002, 0.043] 1.000 19552 16539
Dialect Literacy \(\times\) Word Type -0.006 0.011 [-0.028, 0.016] 1.000 21319 19307
Dialect \(\times\) Word Familiarity -0.007 0.009 [-0.025, 0.011] 1.003 2782 7124
Dialect & Social \(\times\) Word Familiarity 0.001 0.009 [-0.018, 0.019] 1.001 3279 8546
Dialect Literacy \(\times\) Word Familiarity -0.004 0.010 [-0.022, 0.015] 1.001 3022 7788
Task \(\times\) Dialect \(\times\) Word Type 0.005 0.011 [-0.015, 0.026] 1.000 22200 18142
Task \(\times\) Dialect & Social \(\times\) Word Type 0.005 0.010 [-0.015, 0.026] 1.000 20607 18319
Task \(\times\) Dialect Literacy \(\times\) Word Type -0.013 0.010 [-0.033, 0.008] 1.000 23328 18455
Task \(\times\) Dialect \(\times\) Word Familiarity -0.006 0.006 [-0.018, 0.006] 1.000 22972 18686
Task \(\times\) Dialect & Social \(\times\) Word Familiarity -0.004 0.006 [-0.017, 0.008] 1.000 21835 19248
Task \(\times\) Dialect Literacy \(\times\) Word Familiarity 0.002 0.006 [-0.011, 0.014] 1.000 21982 19288
\(\phi\)
Intercept 2.643 0.044 [2.558, 2.731] 1.003 2173 5803
Task -0.184 0.020 [-0.222, -0.146] 1.000 10289 17192
Dialect 0.065 0.059 [-0.052, 0.179] 1.003 2015 3984
Dialect & Social -0.031 0.059 [-0.146, 0.084] 1.002 2006 4967
Dialect Literacy 0.011 0.058 [-0.101, 0.127] 1.002 2415 5978
Word Type -0.003 0.033 [-0.069, 0.061] 1.000 8299 12907
Word Familiarity 0.095 0.023 [0.050, 0.140] 1.001 7498 12119
Task \(\times\) Dialect 0.022 0.031 [-0.039, 0.084] 1.001 9083 13062
Task \(\times\) Dialect & Social 0.009 0.032 [-0.054, 0.071] 1.000 8361 14215
Task \(\times\) Dialect Literacy -0.039 0.031 [-0.100, 0.024] 1.000 7657 13387
Task \(\times\) Word Type -0.017 0.014 [-0.045, 0.011] 1.000 24341 18144
Task \(\times\) Word Familiarity 0.009 0.009 [-0.009, 0.028] 1.000 20434 16978
Dialect \(\times\) Word Type 0.024 0.026 [-0.028, 0.074] 1.000 19798 17214
Dialect & Social \(\times\) Word Type -0.020 0.025 [-0.069, 0.029] 1.000 20696 17910
Dialect Literacy \(\times\) Word Type -0.024 0.027 [-0.077, 0.030] 1.000 17569 17857
Dialect \(\times\) Word Familiarity 0.019 0.020 [-0.020, 0.059] 1.000 12286 16528
Dialect & Social \(\times\) Word Familiarity -0.026 0.020 [-0.065, 0.012] 1.000 12205 16205
Dialect Literacy \(\times\) Word Familiarity -0.001 0.021 [-0.042, 0.041] 1.000 12006 14273
Task \(\times\) Dialect \(\times\) Word Type 0.047 0.024 [-0.001, 0.094] 1.000 22267 18108
Task \(\times\) Dialect & Social \(\times\) Word Type -0.001 0.024 [-0.048, 0.047] 1.000 21631 18927
Task \(\times\) Dialect Literacy \(\times\) Word Type -0.001 0.024 [-0.048, 0.046] 1.000 22569 18027
Task \(\times\) Dialect \(\times\) Word Familiarity 0.003 0.016 [-0.028, 0.033] 1.000 22592 18688
Task \(\times\) Dialect & Social \(\times\) Word Familiarity 0.004 0.016 [-0.027, 0.035] 1.000 19848 17845
Task \(\times\) Dialect Literacy \(\times\) Word Familiarity -0.013 0.016 [-0.042, 0.018] 1.000 21878 18709
\(\alpha\)
Intercept -0.322 0.127 [-0.571, -0.070] 1.002 4357 9644
Task 0.276 0.027 [0.224, 0.329] 1.000 5887 12534
Dialect 0.030 0.111 [-0.189, 0.247] 1.002 2390 4549
Dialect & Social 0.027 0.112 [-0.192, 0.246] 1.001 2437 6047
Dialect Literacy 0.009 0.111 [-0.206, 0.229] 1.002 3048 6900
Word Type 0.129 0.128 [-0.123, 0.383] 1.000 8811 12646
Word Familiarity -0.019 0.081 [-0.179, 0.139] 1.001 8079 13276
Task \(\times\) Dialect 0.076 0.047 [-0.015, 0.167] 1.001 6314 12914
Task \(\times\) Dialect & Social 0.001 0.047 [-0.091, 0.092] 1.000 6271 12575
Task \(\times\) Dialect Literacy 0.000 0.046 [-0.090, 0.091] 1.001 6217 10116
Task \(\times\) Word Type 0.077 0.018 [0.043, 0.111] 1.000 33338 18081
Task \(\times\) Word Familiarity -0.026 0.011 [-0.048, -0.005] 1.000 32607 18646
Dialect \(\times\) Word Type -0.071 0.038 [-0.145, 0.004] 1.000 15363 16866
Dialect & Social \(\times\) Word Type -0.013 0.037 [-0.085, 0.060] 1.000 16414 17588
Dialect Literacy \(\times\) Word Type -0.084 0.037 [-0.157, -0.013] 1.000 15749 16738
Dialect \(\times\) Word Familiarity -0.046 0.029 [-0.104, 0.010] 1.000 10467 14463
Dialect & Social \(\times\) Word Familiarity -0.016 0.029 [-0.072, 0.040] 1.000 9434 14577
Dialect Literacy \(\times\) Word Familiarity 0.066 0.028 [0.010, 0.121] 1.000 9835 15193
Task \(\times\) Dialect \(\times\) Word Type -0.078 0.030 [-0.137, -0.019] 1.000 24660 18592
Task \(\times\) Dialect & Social \(\times\) Word Type 0.033 0.030 [-0.026, 0.091] 1.000 25700 18775
Task \(\times\) Dialect Literacy \(\times\) Word Type 0.007 0.030 [-0.052, 0.067] 1.000 24559 18031
Task \(\times\) Dialect \(\times\) Word Familiarity 0.013 0.020 [-0.024, 0.052] 1.000 26206 18406
Task \(\times\) Dialect & Social \(\times\) Word Familiarity -0.030 0.019 [-0.068, 0.008] 1.000 26062 18774
Task \(\times\) Dialect Literacy \(\times\) Word Familiarity 0.024 0.019 [-0.014, 0.061] 1.000 26449 18581
\(\gamma\)
Intercept -3.865 0.395 [-4.663, -3.129] 1.004 1395 4218
Task -0.336 0.215 [-0.741, 0.106] 1.001 7653 11945
Dialect -0.498 0.479 [-1.427, 0.448] 1.007 998 2146
Dialect & Social 0.067 0.478 [-0.873, 0.994] 1.004 1216 2812
Dialect Literacy 0.213 0.471 [-0.714, 1.125] 1.006 1131 2431
Word Type -0.450 0.213 [-0.863, -0.026] 1.000 12521 15233
Word Familiarity 0.101 0.191 [-0.298, 0.455] 1.000 7840 11954
Task \(\times\) Dialect 0.453 0.229 [0.006, 0.907] 1.001 5693 14185
Task \(\times\) Dialect & Social -0.296 0.222 [-0.734, 0.140] 1.000 5655 13101
Task \(\times\) Dialect Literacy 0.041 0.229 [-0.414, 0.488] 1.001 6168 13407
Task \(\times\) Word Type -0.004 0.095 [-0.189, 0.185] 1.000 20633 17428
Task \(\times\) Word Familiarity 0.123 0.087 [-0.051, 0.293] 1.000 20958 18994
Dialect \(\times\) Word Type 0.158 0.176 [-0.186, 0.502] 1.000 16019 16929
Dialect & Social \(\times\) Word Type 0.210 0.172 [-0.128, 0.545] 1.000 15381 17442
Dialect Literacy \(\times\) Word Type 0.013 0.182 [-0.342, 0.369] 1.000 15063 16743
Dialect \(\times\) Word Familiarity -0.116 0.184 [-0.477, 0.241] 1.001 8137 13815
Dialect & Social \(\times\) Word Familiarity 0.046 0.172 [-0.292, 0.385] 1.000 7207 12831
Dialect Literacy \(\times\) Word Familiarity 0.044 0.189 [-0.331, 0.416] 1.001 7974 13449
Task \(\times\) Dialect \(\times\) Word Type 0.175 0.148 [-0.116, 0.466] 1.000 20221 18302
Task \(\times\) Dialect & Social \(\times\) Word Type 0.018 0.144 [-0.265, 0.302] 1.000 18800 18348
Task \(\times\) Dialect Literacy \(\times\) Word Type -0.055 0.153 [-0.360, 0.245] 1.000 17793 17422
Task \(\times\) Dialect \(\times\) Word Familiarity 0.498 0.149 [0.211, 0.798] 1.000 15651 15454
Task \(\times\) Dialect & Social \(\times\) Word Familiarity -0.390 0.133 [-0.661, -0.139] 1.000 16184 17639
Task \(\times\) Dialect Literacy \(\times\) Word Familiarity -0.033 0.153 [-0.334, 0.268] 1.000 16146 16892

Exploratory Covariate Testing Model

A summary of the Testing Phase model incorporating the mean scores in the vocabulary test as a covariate is provided below. This can be used to determine model diagnostics and coefficients. This model used the same coding scheme as the Testing Phase model, but included a continuous numerical predictor of mean vocabulary test performance (ranging from 0 to 1).

Parameter Estimate Est. Error 95% CrI \(\hat{R}\) Bulk ESS Tail ESS
\(\mu\)
Intercept -0.693 0.086 [-0.861, -0.523] 1.004 1916 3997
Mean Vocabulary Test nLED 0.482 0.121 [0.243, 0.720] 1.002 2498 5228
Task -0.098 0.031 [-0.159, -0.038] 1.002 6745 11373
Dialect -0.081 0.103 [-0.283, 0.122] 1.000 2563 5344
Dialect & Social -0.059 0.102 [-0.257, 0.140] 1.003 2621 5524
Dialect Literacy -0.027 0.106 [-0.235, 0.177] 1.003 3048 6633
Word Type 0.047 0.042 [-0.035, 0.129] 1.001 6909 11300
Word Familiarity 0.041 0.028 [-0.015, 0.096] 1.002 5576 9887
Mean Vocabulary Test nLED \(\times\) Task 0.065 0.047 [-0.027, 0.156] 1.002 6841 11173
Mean Vocabulary Test nLED \(\times\) Dialect 0.043 0.148 [-0.246, 0.330] 1.001 2947 6340
Mean Vocabulary Test nLED \(\times\) Dialect & Social 0.091 0.152 [-0.211, 0.385] 1.001 3080 6242
Mean Vocabulary Test nLED \(\times\) Dialect Literacy 0.086 0.155 [-0.220, 0.388] 1.001 3756 6996
Task \(\times\) Dialect -0.013 0.050 [-0.112, 0.085] 1.000 7520 12366
Task \(\times\) Dialect & Social 0.129 0.047 [0.037, 0.220] 1.001 7490 12189
Task \(\times\) Dialect Literacy -0.020 0.053 [-0.124, 0.086] 1.001 7289 11955
Mean Vocabulary Test nLED \(\times\) Word Type -0.074 0.051 [-0.173, 0.026] 1.000 8413 13230
Mean Vocabulary Test nLED \(\times\) Word Familiarity -0.069 0.036 [-0.138, 0.003] 1.001 6391 11737
Task \(\times\) Word Type 0.026 0.024 [-0.021, 0.072] 1.000 13711 16671
Task \(\times\) Word Familiarity 0.026 0.014 [-0.001, 0.053] 1.000 13588 16217
Dialect \(\times\) Word Type 0.045 0.043 [-0.039, 0.129] 1.000 8974 14041
Dialect & Social \(\times\) Word Type 0.011 0.038 [-0.064, 0.085] 1.000 9970 13728
Dialect Literacy \(\times\) Word Type 0.056 0.044 [-0.030, 0.144] 1.000 10027 15117
Dialect \(\times\) Word Familiarity -0.010 0.030 [-0.069, 0.047] 1.001 6754 12410
Dialect & Social \(\times\) Word Familiarity -0.024 0.028 [-0.079, 0.031] 1.001 7171 12105
Dialect Literacy \(\times\) Word Familiarity 0.029 0.033 [-0.035, 0.094] 1.000 7252 13006
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect 0.033 0.074 [-0.113, 0.178] 1.000 7556 12263
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect & Social -0.172 0.073 [-0.313, -0.029] 1.001 7498 12616
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect Literacy 0.012 0.080 [-0.146, 0.169] 1.001 7221 11443
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Word Type -0.048 0.036 [-0.118, 0.021] 1.000 14242 16701
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Word Familiarity -0.025 0.020 [-0.065, 0.015] 1.000 14006 16024
Mean Vocabulary Test nLED \(\times\) Dialect \(\times\) Word Type -0.051 0.062 [-0.174, 0.071] 1.000 9228 14135
Mean Vocabulary Test nLED \(\times\) Dialect & Social \(\times\) Word Type 0.015 0.058 [-0.098, 0.130] 1.000 10637 14973
Mean Vocabulary Test nLED \(\times\) Dialect Literacy \(\times\) Word Type -0.097 0.065 [-0.227, 0.029] 1.000 10299 15074
Mean Vocabulary Test nLED \(\times\) Dialect \(\times\) Word Familiarity 0.007 0.043 [-0.078, 0.093] 1.001 7328 12247
Mean Vocabulary Test nLED \(\times\) Dialect & Social \(\times\) Word Familiarity 0.038 0.043 [-0.047, 0.121] 1.000 7262 12548
Mean Vocabulary Test nLED \(\times\) Dialect Literacy \(\times\) Word Familiarity -0.052 0.049 [-0.149, 0.044] 1.000 7571 13132
Task \(\times\) Dialect \(\times\) Word Type 0.064 0.042 [-0.019, 0.146] 1.001 8799 13967
Task \(\times\) Dialect & Social \(\times\) Word Type 0.016 0.038 [-0.057, 0.089] 1.001 9765 14421
Task \(\times\) Dialect Literacy \(\times\) Word Type -0.062 0.043 [-0.146, 0.023] 1.000 10159 14892
Task \(\times\) Dialect \(\times\) Word Familiarity -0.018 0.023 [-0.063, 0.028] 1.000 9589 13660
Task \(\times\) Dialect & Social \(\times\) Word Familiarity -0.014 0.022 [-0.056, 0.029] 1.001 10556 14908
Task \(\times\) Dialect Literacy \(\times\) Word Familiarity -0.006 0.026 [-0.057, 0.045] 1.000 10195 15080
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect \(\times\) Word Type -0.086 0.062 [-0.206, 0.036] 1.001 9137 13855
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect & Social \(\times\) Word Type -0.017 0.057 [-0.129, 0.094] 1.001 10359 14624
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect Literacy \(\times\) Word Type 0.078 0.064 [-0.049, 0.203] 1.000 10506 15284
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect \(\times\) Word Familiarity 0.019 0.034 [-0.048, 0.086] 1.000 9598 14866
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect & Social \(\times\) Word Familiarity 0.015 0.034 [-0.050, 0.081] 1.000 10972 15294
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect Literacy \(\times\) Word Familiarity 0.011 0.039 [-0.066, 0.088] 1.000 10233 14349
\(\phi\)
Intercept 2.882 0.113 [2.661, 3.106] 1.001 5142 9843
Mean Vocabulary Test nLED -0.362 0.165 [-0.683, -0.039] 1.000 5370 10626
Task -0.316 0.069 [-0.452, -0.182] 1.001 8696 13997
Dialect 0.037 0.148 [-0.254, 0.324] 1.001 5402 9940
Dialect & Social 0.100 0.141 [-0.176, 0.375] 1.001 6107 10795
Dialect Literacy -0.183 0.153 [-0.483, 0.117] 1.002 5305 9197
Word Type -0.015 0.064 [-0.141, 0.110] 1.000 12885 15759
Word Familiarity 0.129 0.049 [0.035, 0.226] 1.000 9741 14882
Mean Vocabulary Test nLED \(\times\) Task 0.207 0.103 [0.004, 0.410] 1.001 8274 13721
Mean Vocabulary Test nLED \(\times\) Dialect 0.064 0.216 [-0.358, 0.485] 1.000 5342 10110
Mean Vocabulary Test nLED \(\times\) Dialect & Social -0.229 0.214 [-0.649, 0.192] 1.001 6385 10913
Mean Vocabulary Test nLED \(\times\) Dialect Literacy 0.308 0.228 [-0.145, 0.755] 1.002 5711 9797
Task \(\times\) Dialect -0.010 0.105 [-0.217, 0.195] 1.000 7508 13845
Task \(\times\) Dialect & Social -0.197 0.102 [-0.397, 0.004] 1.001 8000 13761
Task \(\times\) Dialect Literacy 0.063 0.109 [-0.151, 0.277] 1.000 7938 13728
Mean Vocabulary Test nLED \(\times\) Word Type 0.010 0.084 [-0.157, 0.175] 1.000 15805 17652
Mean Vocabulary Test nLED \(\times\) Word Familiarity -0.054 0.066 [-0.183, 0.074] 1.000 10730 14473
Task \(\times\) Word Type -0.037 0.057 [-0.149, 0.076] 1.000 14295 16702
Task \(\times\) Word Familiarity 0.001 0.036 [-0.070, 0.071] 1.000 13503 15904
Dialect \(\times\) Word Type 0.071 0.093 [-0.114, 0.255] 1.000 12546 15671
Dialect & Social \(\times\) Word Type -0.048 0.091 [-0.225, 0.130] 1.000 11664 14006
Dialect Literacy \(\times\) Word Type -0.178 0.097 [-0.369, 0.010] 1.000 12025 15427
Dialect \(\times\) Word Familiarity -0.004 0.071 [-0.143, 0.134] 1.001 8795 14221
Dialect & Social \(\times\) Word Familiarity -0.087 0.068 [-0.223, 0.047] 1.000 9496 13632
Dialect Literacy \(\times\) Word Familiarity -0.001 0.075 [-0.149, 0.145] 1.000 9006 14427
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect 0.045 0.154 [-0.255, 0.349] 1.000 7454 13363
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect & Social 0.331 0.155 [0.026, 0.634] 1.000 8094 14049
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect Literacy -0.164 0.162 [-0.482, 0.150] 1.000 7950 13077
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Word Type 0.029 0.083 [-0.135, 0.193] 1.000 14613 16737
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Word Familiarity 0.014 0.053 [-0.090, 0.117] 1.000 13943 16692
Mean Vocabulary Test nLED \(\times\) Dialect \(\times\) Word Type -0.074 0.134 [-0.336, 0.190] 1.000 12628 16385
Mean Vocabulary Test nLED \(\times\) Dialect & Social \(\times\) Word Type 0.039 0.134 [-0.224, 0.303] 1.000 11930 15432
Mean Vocabulary Test nLED \(\times\) Dialect Literacy \(\times\) Word Type 0.232 0.141 [-0.043, 0.510] 1.000 12148 15760
Mean Vocabulary Test nLED \(\times\) Dialect \(\times\) Word Familiarity 0.034 0.102 [-0.166, 0.234] 1.001 8819 14203
Mean Vocabulary Test nLED \(\times\) Dialect & Social \(\times\) Word Familiarity 0.096 0.102 [-0.105, 0.300] 1.000 9577 14850
Mean Vocabulary Test nLED \(\times\) Dialect Literacy \(\times\) Word Familiarity 0.000 0.111 [-0.215, 0.218] 1.000 8931 14479
Task \(\times\) Dialect \(\times\) Word Type -0.015 0.092 [-0.195, 0.168] 1.000 10766 14940
Task \(\times\) Dialect & Social \(\times\) Word Type -0.067 0.089 [-0.242, 0.110] 1.000 11973 15099
Task \(\times\) Dialect Literacy \(\times\) Word Type 0.153 0.096 [-0.034, 0.339] 1.000 10897 15422
Task \(\times\) Dialect \(\times\) Word Familiarity 0.029 0.060 [-0.089, 0.147] 1.000 11580 15050
Task \(\times\) Dialect & Social \(\times\) Word Familiarity -0.047 0.060 [-0.165, 0.071] 1.000 10879 15650
Task \(\times\) Dialect Literacy \(\times\) Word Familiarity 0.106 0.064 [-0.019, 0.232] 1.001 11116 15296
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect \(\times\) Word Type 0.085 0.134 [-0.180, 0.344] 1.000 10823 15344
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect & Social \(\times\) Word Type 0.103 0.132 [-0.156, 0.359] 1.000 12272 15436
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect Literacy \(\times\) Word Type -0.235 0.140 [-0.509, 0.040] 1.000 11041 15011
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect \(\times\) Word Familiarity -0.044 0.087 [-0.214, 0.126] 1.000 11680 15176
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect & Social \(\times\) Word Familiarity 0.075 0.090 [-0.101, 0.252] 1.000 11182 15369
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect Literacy \(\times\) Word Familiarity -0.177 0.095 [-0.362, 0.007] 1.001 11092 15198
\(\alpha\)
Intercept 0.582 0.238 [0.116, 1.040] 1.001 6207 11122
Mean Vocabulary Test nLED -1.397 0.310 [-2.001, -0.786] 1.001 6445 11522
Task 0.478 0.094 [0.292, 0.661] 1.001 8022 13257
Dialect 0.162 0.269 [-0.365, 0.696] 1.001 5973 9887
Dialect & Social 0.037 0.262 [-0.484, 0.549] 1.002 6309 10686
Dialect Literacy 0.122 0.286 [-0.441, 0.681] 1.002 5831 8551
Word Type 0.404 0.166 [0.074, 0.727] 1.000 10057 13912
Word Familiarity -0.084 0.111 [-0.300, 0.137] 1.001 8832 12986
Mean Vocabulary Test nLED \(\times\) Task -0.319 0.143 [-0.598, -0.033] 1.001 8085 13079
Mean Vocabulary Test nLED \(\times\) Dialect -0.141 0.400 [-0.932, 0.639] 1.001 6133 11371
Mean Vocabulary Test nLED \(\times\) Dialect & Social -0.085 0.406 [-0.880, 0.724] 1.001 6540 11330
Mean Vocabulary Test nLED \(\times\) Dialect Literacy -0.173 0.431 [-1.025, 0.674] 1.002 5886 10085
Task \(\times\) Dialect 0.232 0.148 [-0.057, 0.522] 1.001 7808 13332
Task \(\times\) Dialect & Social 0.077 0.140 [-0.196, 0.353] 1.000 7830 13202
Task \(\times\) Dialect Literacy -0.175 0.157 [-0.481, 0.132] 1.001 8239 12882
Mean Vocabulary Test nLED \(\times\) Word Type -0.441 0.137 [-0.709, -0.171] 1.000 13598 16988
Mean Vocabulary Test nLED \(\times\) Word Familiarity 0.101 0.103 [-0.101, 0.302] 1.000 9506 14408
Task \(\times\) Word Type 0.205 0.062 [0.084, 0.328] 1.000 16890 18011
Task \(\times\) Word Familiarity -0.132 0.039 [-0.208, -0.055] 1.000 16488 17445
Dialect \(\times\) Word Type -0.172 0.118 [-0.405, 0.063] 1.000 12073 15275
Dialect & Social \(\times\) Word Type -0.111 0.112 [-0.332, 0.111] 1.000 12304 15479
Dialect Literacy \(\times\) Word Type 0.063 0.124 [-0.179, 0.308] 1.001 11352 15705
Dialect \(\times\) Word Familiarity -0.071 0.092 [-0.252, 0.111] 1.001 8382 13367
Dialect & Social \(\times\) Word Familiarity -0.069 0.088 [-0.244, 0.103] 1.001 9191 13857
Dialect Literacy \(\times\) Word Familiarity 0.124 0.099 [-0.068, 0.322] 1.001 9004 13690
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect -0.230 0.222 [-0.665, 0.201] 1.001 7853 12887
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect & Social -0.139 0.219 [-0.571, 0.293] 1.000 8249 13657
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect Literacy 0.278 0.239 [-0.190, 0.745] 1.001 8353 13431
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Word Type -0.207 0.094 [-0.391, -0.023] 1.000 16241 17929
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Word Familiarity 0.164 0.060 [0.047, 0.280] 1.000 16659 17691
Mean Vocabulary Test nLED \(\times\) Dialect \(\times\) Word Type 0.169 0.176 [-0.177, 0.514] 1.000 11821 15283
Mean Vocabulary Test nLED \(\times\) Dialect & Social \(\times\) Word Type 0.152 0.174 [-0.193, 0.494] 1.000 12112 15809
Mean Vocabulary Test nLED \(\times\) Dialect Literacy \(\times\) Word Type -0.233 0.189 [-0.606, 0.134] 1.001 11389 16455
Mean Vocabulary Test nLED \(\times\) Dialect \(\times\) Word Familiarity 0.036 0.138 [-0.236, 0.308] 1.001 8574 14145
Mean Vocabulary Test nLED \(\times\) Dialect & Social \(\times\) Word Familiarity 0.091 0.137 [-0.179, 0.362] 1.001 9181 13600
Mean Vocabulary Test nLED \(\times\) Dialect Literacy \(\times\) Word Familiarity -0.095 0.152 [-0.396, 0.199] 1.001 8822 13551
Task \(\times\) Dialect \(\times\) Word Type -0.221 0.105 [-0.429, -0.021] 1.001 12942 15620
Task \(\times\) Dialect & Social \(\times\) Word Type 0.190 0.099 [-0.004, 0.386] 1.001 13698 17378
Task \(\times\) Dialect Literacy \(\times\) Word Type -0.031 0.112 [-0.250, 0.185] 1.000 12826 16411
Task \(\times\) Dialect \(\times\) Word Familiarity -0.043 0.066 [-0.171, 0.087] 1.000 13930 16447
Task \(\times\) Dialect & Social \(\times\) Word Familiarity 0.066 0.062 [-0.057, 0.189] 1.000 13637 16390
Task \(\times\) Dialect Literacy \(\times\) Word Familiarity -0.144 0.072 [-0.287, -0.003] 1.000 12290 15462
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect \(\times\) Word Type 0.233 0.156 [-0.071, 0.541] 1.001 12904 15165
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect & Social \(\times\) Word Type -0.263 0.155 [-0.565, 0.040] 1.000 13840 17030
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect Literacy \(\times\) Word Type 0.067 0.171 [-0.264, 0.406] 1.000 12847 16596
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect \(\times\) Word Familiarity 0.087 0.099 [-0.108, 0.279] 1.000 13951 15637
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect & Social \(\times\) Word Familiarity -0.151 0.097 [-0.341, 0.040] 1.000 13658 16651
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect Literacy \(\times\) Word Familiarity 0.268 0.110 [0.053, 0.484] 1.000 12338 15805
\(\gamma\)
Intercept -5.895 0.637 [-7.162, -4.662] 1.003 2657 6409
Mean Vocabulary Test nLED 3.152 0.843 [1.492, 4.809] 1.001 4206 9626
Task -0.511 0.393 [-1.281, 0.270] 1.001 10075 14987
Dialect -0.245 0.611 [-1.436, 0.944] 1.001 3262 7838
Dialect & Social 0.046 0.608 [-1.144, 1.238] 1.002 3262 7675
Dialect Literacy 0.130 0.615 [-1.096, 1.326] 1.001 3126 7861
Word Type -0.423 0.359 [-1.127, 0.296] 1.000 14415 16572
Word Familiarity 0.367 0.323 [-0.282, 0.983] 1.000 10289 15030
Mean Vocabulary Test nLED \(\times\) Task 0.255 0.541 [-0.809, 1.316] 1.001 12258 15498
Mean Vocabulary Test nLED \(\times\) Dialect -0.658 0.800 [-2.222, 0.902] 1.001 5961 11940
Mean Vocabulary Test nLED \(\times\) Dialect & Social 0.373 0.811 [-1.228, 1.969] 1.002 5441 11479
Mean Vocabulary Test nLED \(\times\) Dialect Literacy 0.115 0.807 [-1.481, 1.687] 1.000 6310 12317
Task \(\times\) Dialect 0.294 0.466 [-0.628, 1.203] 1.000 12681 16437
Task \(\times\) Dialect & Social -0.053 0.465 [-0.966, 0.849] 1.000 13472 17028
Task \(\times\) Dialect Literacy 0.182 0.464 [-0.731, 1.088] 1.000 12095 16536
Mean Vocabulary Test nLED \(\times\) Word Type -0.043 0.512 [-1.068, 0.964] 1.000 14472 16650
Mean Vocabulary Test nLED \(\times\) Word Familiarity -0.495 0.468 [-1.426, 0.419] 1.000 12431 15540
Task \(\times\) Word Type -0.158 0.299 [-0.743, 0.431] 1.001 15499 17441
Task \(\times\) Word Familiarity 0.612 0.260 [0.102, 1.120] 1.000 13680 16359
Dialect \(\times\) Word Type 0.119 0.432 [-0.727, 0.964] 1.000 14184 15990
Dialect & Social \(\times\) Word Type -0.240 0.429 [-1.084, 0.599] 1.000 14480 16847
Dialect Literacy \(\times\) Word Type 0.150 0.441 [-0.705, 1.012] 1.001 14962 16635
Dialect \(\times\) Word Familiarity -0.382 0.408 [-1.183, 0.424] 1.000 11711 16109
Dialect & Social \(\times\) Word Familiarity 0.488 0.397 [-0.289, 1.267] 1.000 12021 14108
Dialect Literacy \(\times\) Word Familiarity 0.115 0.407 [-0.680, 0.917] 1.000 12684 15885
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect 0.313 0.655 [-0.963, 1.600] 1.000 14261 17238
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect & Social -0.472 0.667 [-1.786, 0.843] 1.000 14551 16737
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect Literacy -0.223 0.668 [-1.537, 1.077] 1.000 14374 17169
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Word Type 0.242 0.431 [-0.599, 1.087] 1.001 15568 17338
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Word Familiarity -0.751 0.394 [-1.536, 0.011] 1.000 13804 16781
Mean Vocabulary Test nLED \(\times\) Dialect \(\times\) Word Type 0.035 0.605 [-1.150, 1.221] 1.000 14573 15758
Mean Vocabulary Test nLED \(\times\) Dialect & Social \(\times\) Word Type 0.704 0.612 [-0.490, 1.904] 1.000 14716 16716
Mean Vocabulary Test nLED \(\times\) Dialect Literacy \(\times\) Word Type -0.197 0.629 [-1.434, 1.029] 1.000 15105 16416
Mean Vocabulary Test nLED \(\times\) Dialect \(\times\) Word Familiarity 0.398 0.589 [-0.762, 1.552] 1.000 13039 15757
Mean Vocabulary Test nLED \(\times\) Dialect & Social \(\times\) Word Familiarity -0.668 0.578 [-1.814, 0.461] 1.000 13289 15567
Mean Vocabulary Test nLED \(\times\) Dialect Literacy \(\times\) Word Familiarity -0.120 0.598 [-1.305, 1.052] 1.000 13911 16592
Task \(\times\) Dialect \(\times\) Word Type 0.495 0.409 [-0.303, 1.304] 1.000 14738 17135
Task \(\times\) Dialect & Social \(\times\) Word Type 0.014 0.388 [-0.747, 0.771] 1.000 14793 16713
Task \(\times\) Dialect Literacy \(\times\) Word Type 0.328 0.410 [-0.468, 1.131] 1.000 14522 17347
Task \(\times\) Dialect \(\times\) Word Familiarity 0.782 0.369 [0.071, 1.517] 1.000 12302 15574
Task \(\times\) Dialect & Social \(\times\) Word Familiarity -0.684 0.366 [-1.397, 0.033] 1.001 13911 16526
Task \(\times\) Dialect Literacy \(\times\) Word Familiarity -0.102 0.349 [-0.789, 0.590] 1.000 14012 16597
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect \(\times\) Word Type -0.436 0.565 [-1.551, 0.674] 1.000 14690 16983
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect & Social \(\times\) Word Type -0.058 0.565 [-1.167, 1.055] 1.000 14979 17154
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect Literacy \(\times\) Word Type -0.563 0.590 [-1.718, 0.586] 1.000 14644 17017
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect \(\times\) Word Familiarity -0.390 0.536 [-1.442, 0.658] 1.000 12744 16179
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect & Social \(\times\) Word Familiarity 0.443 0.546 [-0.632, 1.497] 1.001 14418 17411
Mean Vocabulary Test nLED \(\times\) Task \(\times\) Dialect Literacy \(\times\) Word Familiarity 0.056 0.535 [-0.995, 1.115] 1.000 14104 17489
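The four parameter blocks above (\(\mu\), \(\phi\), \(\alpha\), \(\gamma\)) are consistent with a zero-one-inflated beta likelihood for the bounded nLED outcome; as a sketch under that assumption, with \(\alpha\) the probability of a boundary response and \(\gamma\) the conditional probability of a one:

\[
f(y \mid \mu, \phi, \alpha, \gamma) =
\begin{cases}
\alpha\,(1 - \gamma) & y = 0 \\
\alpha\,\gamma & y = 1 \\
(1 - \alpha)\,\mathrm{Beta}\!\left(y \mid \mu\phi,\ (1 - \mu)\phi\right) & 0 < y < 1
\end{cases}
\]

Each parameter is modelled on its own link scale, which is why the table reports a separate block of coefficients per parameter.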

G. Visualisation of Response Types

We estimated the mean proportion of each response type, aggregated across participants.

Mean proportion of response types. Response types are: correct (e.g. target: *kuble*--response: *kuble*); dialect word match: the dialect variant is produced in response to the corresponding standard contrastive word (e.g. target: *kuble*--response: *xuble*); dialect word mismatch: a dialect variant is produced in response to a different standard contrastive word (e.g. target: *skefi*--response: *xuble*); standard word mismatch: a standard word is produced in response to another standard word (e.g. target: *skefi*--response: *kuble*); other mismatch: any other error that was not part of the response set.

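The classification described in the caption above can be sketched as follows. The mapping from standard contrastive words to dialect variants is hypothetical (the pair *kuble*/*xuble* follows the caption; *xefi* is an invented placeholder variant for *skefi*):

```python
# Hypothetical mapping from standard contrastive words to their dialect
# variants; "kuble" -> "xuble" follows the caption, "xefi" is invented.
DIALECT_VARIANTS = {"kuble": "xuble", "skefi": "xefi"}

def classify_response(target: str, response: str,
                      variants: dict = DIALECT_VARIANTS) -> str:
    """Assign a response to one of the five response types described above."""
    if response == target:
        return "correct"
    if response == variants.get(target):
        return "dialect word match"     # variant of the target itself
    if response in variants.values():
        return "dialect word mismatch"  # variant of a different standard word
    if response in variants:
        return "standard word mismatch" # a different standard word from the set
    return "other mismatch"             # any response outside the response set
```

Applied to the caption's examples, `classify_response("skefi", "xuble")` yields `"dialect word mismatch"` and `classify_response("skefi", "kuble")` yields `"standard word mismatch"`.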

References

Donaldson, J. (2005). The Gruffalo’s Child. Pan Macmillan.

Milde, B. (2011). Shapecatcher: Unicode character recognition. Retrieved from http://shapecatcher.com/

Rossion, B., & Pourtois, G. (2004). Revisiting Snodgrass and Vanderwart’s object pictorial set: The role of surface detail in basic-level object recognition. Perception, 33(2), 217–236. doi:10.1068/p5117

Vasishth, S., Mertzen, D., Jäger, L. A., & Gelman, A. (2018). The statistical significance filter leads to overoptimistic expectations of replicability. Journal of Memory and Language, 103(August), 151–175. doi:10.1016/j.jml.2018.07.004

Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P.-C. (2020). Rank-normalization, folding, and localization: An improved \(\widehat{R}\) for assessing convergence of MCMC. Bayesian Analysis. doi:10.1214/20-BA1221

Williams, G. P., Panayotov, N., & Kempe, V. (2020). How does dialect exposure affect learning to read and spell? An artificial orthography study. Journal of Experimental Psychology: General.